14 research outputs found

    First-passage phenomena in hierarchical networks

    In this paper we study Markov processes and related first-passage problems on a class of weighted, modular graphs which generalize the Dyson hierarchical model. In these networks, the coupling strength between two nodes depends on their distance and is modulated by a parameter σ. We find that, in the thermodynamic limit, ergodicity is lost and "distant" nodes cannot be reached. Moreover, for finite-sized systems, there exists a threshold value for σ such that, when σ is relatively large, the inhomogeneity of the coupling pattern prevails and "distant" nodes are hardly reached. The same analysis is also carried out for generic hierarchical graphs, where interactions involve p-plets (p > 2) of nodes; we find that ergodicity is still broken in the thermodynamic limit, but no threshold value for σ emerges, ultimately due to the slow growth of the network diameter with the system size.
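
    To make the construction above concrete, the following minimal numpy sketch (an illustration of the idea, not the authors' code) builds a Dyson-like weighted hierarchical graph in which the coupling between two leaves decays as 4^(-σ·d) with their hierarchical distance d, and computes the exact mean first-passage time of the associated random walk; the decay form, graph size and source/target nodes are assumptions made for the example.

```python
import numpy as np

def hierarchical_distance(i, j):
    """Hierarchical distance between leaves i and j: the level of their
    lowest common ancestor in a binary hierarchy (0 if i == j)."""
    return 0 if i == j else (i ^ j).bit_length()

def dyson_like_weights(k, sigma):
    """Weighted adjacency of a 2**k-node hierarchical graph with couplings
    decaying as 4**(-sigma * distance) -- an illustrative parameterization."""
    n = 2 ** k
    J = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                J[i, j] = 4.0 ** (-sigma * hierarchical_distance(i, j))
    return J

def mean_first_passage_time(J, source, target):
    """Exact mean first-passage time of the random walk with transition
    probabilities P_ij = J_ij / sum_l J_il, from `source` to `target`."""
    P = J / J.sum(axis=1, keepdims=True)
    keep = [i for i in range(len(J)) if i != target]
    Q = P[np.ix_(keep, keep)]                      # walk restricted to non-target nodes
    m = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
    return m[keep.index(source)]

for sigma in (0.6, 0.9, 1.2):
    J = dyson_like_weights(k=8, sigma=sigma)
    # passage from node 0 to a node in the "most distant" branch
    print(sigma, mean_first_passage_time(J, source=0, target=2 ** 8 - 1))
```

    Increasing σ makes the walk struggle to reach the opposite branch, which is the finite-size counterpart of the ergodicity loss discussed above.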

    Phase transition for the Maki-Thompson rumour model on a small-world network

    We consider the Maki-Thompson model for the stochastic propagation of a rumour within a population. We extend the original hypothesis of a homogeneously mixed population by allowing for a small-world network embedding the model. This structure is realized starting from a k-regular ring and inserting, on average, c additional links, in such a way that k and c are tunable parameters for the population architecture. We prove that this system exhibits a transition between a regime of localization (where the final number of stiflers is at most logarithmic in the population size) and a regime of propagation (where the final number of stiflers grows algebraically with the population size) at a finite value of the network parameter c. A quantitative estimate for the critical value of c is obtained via extensive numerical simulations. Comment: 24 pages, 4 figures.
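
    As a rough illustration of the dynamics described above (not the authors' implementation), here is a small Python simulation of the Maki-Thompson rules on a k-regular ring augmented with random shortcuts; how the c extra links are drawn, the population size and the update schedule are assumptions made for this sketch.

```python
import random

def small_world(n, k, c, rng):
    """k-regular ring on n nodes plus c random shortcuts; how the extra links
    are drawn is an assumption made for this sketch (here: c distinct edges
    chosen uniformly at random)."""
    nbrs = {v: set() for v in range(n)}
    for v in range(n):
        for d in range(1, k // 2 + 1):             # connect to k/2 nearest nodes per side
            nbrs[v].add((v + d) % n)
            nbrs[(v + d) % n].add(v)
    added = 0
    while added < c:
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v and v not in nbrs[u]:
            nbrs[u].add(v); nbrs[v].add(u); added += 1
    return nbrs

def maki_thompson(nbrs, rng):
    """Maki-Thompson rules: a random spreader contacts a random neighbour;
    an ignorant contact becomes a spreader, otherwise the caller turns stifler.
    Returns the final number of stiflers."""
    state = {v: 'I' for v in nbrs}                 # I = ignorant, S = spreader, R = stifler
    state[0] = 'S'
    spreaders = {0}
    while spreaders:
        u = rng.choice(tuple(spreaders))
        v = rng.choice(tuple(nbrs[u]))
        if state[v] == 'I':
            state[v] = 'S'; spreaders.add(v)
        else:                                      # contact already knows the rumour
            state[u] = 'R'; spreaders.discard(u)
    return sum(1 for x in state.values() if x == 'R')

rng = random.Random(1)
print(maki_thompson(small_world(n=2000, k=4, c=50, rng=rng), rng))
```

    Sweeping c and the population size while recording the final number of stiflers is one way to probe numerically the localization/propagation transition discussed in the abstract.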

    A walk in the statistical mechanical formulation of neural networks

    Neural networks are nowadays both powerful operational tools (e.g., for pattern recognition, data mining, error-correcting codes) and complex theoretical models at the focus of scientific investigation. On the research side, neural networks are handled and studied by psychologists, neurobiologists, engineers, mathematicians and theoretical physicists. In theoretical physics, in particular, the key instrument for the quantitative analysis of neural networks is statistical mechanics. From this perspective, here we first review attractor networks: starting from ferromagnets and spin-glass models, we discuss the underlying philosophy and retrace the path paved by Hopfield and Amit-Gutfreund-Sompolinsky. Going one step further, we highlight the structural equivalence between Hopfield networks (modeling retrieval) and Boltzmann machines (modeling learning), hence building a deep bridge between two inseparable aspects of biological and robotic spontaneous cognition. As a sideline, along this walk we derive two alternative ways (with respect to the original Hebb proposal) to recover the Hebbian paradigm, stemming from ferromagnets and from spin glasses, respectively. Further, as these notes are intended for an engineering audience, we also highlight the mappings between ferromagnets and operational amplifiers and between antiferromagnets and flip-flops (as neural networks built from op-amps and flip-flops are particular spin glasses, and the latter are indeed combinations of ferromagnets and antiferromagnets), hoping that such a bridge serves as a concrete prescription for capturing the beauty of robotics from the statistical-mechanics perspective. Comment: Contribution to the proceedings of the conference NCTA 2014; 12 pages, 7 figures.
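
    As a concrete reference point for the attractor networks reviewed here, the following sketch implements the textbook Hopfield network with Hebbian couplings and zero-temperature asynchronous dynamics; the sizes, number of patterns and corruption level are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 400, 5                                      # neurons, stored patterns (low storage)
xi = rng.choice([-1, 1], size=(P, N))              # random binary patterns

# Hebb rule: J_ij = (1/N) * sum_mu xi^mu_i xi^mu_j, with no self-coupling
J = (xi.T @ xi) / N
np.fill_diagonal(J, 0.0)

def retrieve(s, J, sweeps=20):
    """Zero-temperature asynchronous dynamics: s_i <- sign(sum_j J_ij s_j)."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if J[i] @ s >= 0 else -1
    return s

# start from pattern 0 with roughly 30% of the spins flipped
start = xi[0] * rng.choice([1, -1], size=N, p=[0.7, 0.3])
print("overlap with pattern 0:", retrieve(start, J) @ xi[0] / N)
```

    An overlap close to 1 signals that the corrupted input has been attracted back to the stored pattern, i.e. retrieval.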

    Meta-stable states in the hierarchical Dyson model drive parallel processing in the hierarchical Hopfield network

    In this paper we introduce and investigate the statistical mechanics of hierarchical neural networks. First, we approach these systems à la Mattis, by thinking of the Dyson model as a single-pattern hierarchical neural network, and we discuss the stability of the different retrievable states as predicted by the related self-consistency equations, obtained both from a mean-field bound and from a bound that bypasses the mean-field limitation. The latter is worked out by properly reabsorbing the fluctuations of the magnetization related to higher levels of the hierarchy into effective fields for the lower levels. Remarkably, by mixing Amit's ansatz technique (to select candidate retrievable states) with the interpolation procedure (to solve for the free energy of these states), we prove that, due to gauge symmetry, the Dyson model accomplishes both serial and parallel processing. One step further, we extend this scenario to multiple stored patterns by implementing the Hebb prescription for learning within the couplings. This results in a Hopfield-like network constrained to a hierarchical topology, for which, restricting to the low-storage regime (where the number of patterns grows at most logarithmically with the number of neurons), we prove the existence of the thermodynamic limit for the free energy and give an explicit expression for its mean-field bound and for the related improved bound.
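
    A quick numerical way to see the meta-stable states mentioned above (a sketch under assumed conventions, with couplings decaying as 4^(-σ·ℓ) in the hierarchical level ℓ, not the paper's exact normalization) is to relax, at zero temperature, a configuration in which the two main branches of a Dyson-like ferromagnet are magnetized in opposite directions:

```python
import numpy as np

def level(i, j):
    """Level of the lowest common ancestor of leaves i and j (0 if i == j)."""
    return 0 if i == j else (i ^ j).bit_length()

k, sigma = 7, 0.8                                  # 2**k spins, decay exponent (illustrative)
N = 2 ** k
# Dyson-like ferromagnetic couplings, decaying with the hierarchical level
J = np.array([[0.0 if i == j else 4.0 ** (-sigma * level(i, j))
               for j in range(N)] for i in range(N)])

def relax(s, J, sweeps=50):
    """Zero-temperature single-spin-flip dynamics."""
    s = s.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(len(s)):
            s[i] = 1 if J[i] @ s >= 0 else -1
    return s

mixed = np.r_[np.ones(N // 2), -np.ones(N // 2)]   # the two main branches magnetized oppositely
final = relax(mixed, J)
print("left magnetization:", final[:N // 2].mean(), " right:", final[N // 2:].mean())
```

    For a sufficiently large decay exponent the mixed configuration is left untouched by the dynamics, i.e. it is a fixed point distinct from the two fully ordered states; these are the kind of meta-stabilities that, once the Hebb rule is plugged in, can support parallel retrieval.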

    Topological properties of hierarchical networks

    Hierarchical networks are attracting renewed interest for modelling the organization of a number of biological systems and for tackling the complexity of statistical-mechanical models beyond mean-field limitations. Here we consider the Dyson hierarchical construction for ferromagnets, neural networks and spin glasses, recently analyzed from a statistical-mechanics perspective, and we focus on the topological properties of the underlying structures. In particular, we find that such structures are weighted graphs that exhibit a high degree of clustering and of modularity, with a small spectral gap; the robustness of these features with respect to link removal is also studied. These outcomes are then discussed and related to the statistical-mechanics scenario, with which they are fully consistent. Lastly, we look at these weighted graphs as Markov chains and show that, in the limit of infinite size, the emergence of ergodicity breakdown for the stochastic process mirrors the emergence of meta-stabilities in the corresponding statistical-mechanical analysis.
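
    One of the quantities mentioned above, the spectral gap, can be read off directly from the weighted adjacency matrix; the sketch below (with an assumed 4^(-σ·ℓ) weighting, chosen only for illustration) shows how the gap of the associated random walk shrinks as the graph grows, which is the finite-size signature of the ergodicity breakdown discussed in the abstract.

```python
import numpy as np

def level(i, j):
    """Hierarchical level of the lowest common ancestor of leaves i and j."""
    return 0 if i == j else (i ^ j).bit_length()

def spectral_gap(k, sigma):
    """Spectral gap 1 - lambda_2 of the random walk on a weighted hierarchical
    graph with J_ij = 4**(-sigma * level(i, j)) -- an illustrative weighting."""
    N = 2 ** k
    J = np.array([[0.0 if i == j else 4.0 ** (-sigma * level(i, j))
                   for j in range(N)] for i in range(N)])
    d = J.sum(axis=1)
    # D^{-1/2} J D^{-1/2} is symmetric and similar to the transition matrix D^{-1} J,
    # so it has the same (real) eigenvalues
    S = J / np.sqrt(np.outer(d, d))
    lam = np.linalg.eigvalsh(S)                    # ascending order, lam[-1] = 1
    return 1.0 - lam[-2]

for k in range(4, 10):
    print(k, spectral_gap(k, sigma=0.8))           # the gap shrinks as the graph grows
```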

    From Dyson to Hopfield: Processing on hierarchical networks

    We consider statistical-mechanical models for spin systems built on hierarchical structures, which provide a simple example of a non-mean-field framework. We show that the decay of the coupling with the distance between spins can give rise to peculiar features and to phase diagrams much richer than their mean-field counterparts. In particular, we consider the Dyson model, mimicking ferromagnetism in lattices, and we prove the existence of a number of meta-stabilities, beyond the ordered state, which become stable in the thermodynamic limit. This feature is retained when the hierarchical structure is coupled with the Hebb rule for learning, hence mimicking the modular architecture of neurons, and gives rise to an associative network able to perform both as a serial processor and as a parallel processor, depending crucially on the external stimuli and on the rate at which interactions decay with distance; however, these emergent multitasking features reduce the network capacity with respect to the mean-field counterpart. The analysis is accomplished through statistical mechanics, graph theory, the signal-to-noise technique and numerical simulations, in full mutual consistency. Our results shed light on the biological complexity shown by real networks, and suggest future directions for understanding more realistic models.
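
    For reference, one common way to write the Dyson hierarchical Hamiltonian for 2^(k+1) Ising spins is the recursion below; normalizations and the name of the decay parameter vary across papers, so this should be read as a sketch of the construction rather than the exact definition used in this work.

```latex
H_{k+1}(\sigma) \;=\; H_{k}\big(\sigma^{(1)}\big) + H_{k}\big(\sigma^{(2)}\big)
\;-\; \frac{J}{2^{\,2\rho\,(k+1)}} \sum_{1 \le i < j \le 2^{k+1}} \sigma_i \sigma_j ,
\qquad H_{0} \equiv 0 ,
```

    where σ^(1), σ^(2) denote the spins of the two half-blocks and ρ tunes how fast the effective coupling decays with hierarchical distance: unrolling the recursion, two spins whose lowest common level is d interact with total strength Σ_{l=d}^{k+1} J · 2^(-2ρl).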

    Hierarchical neural networks perform both serial and parallel processing

    In this work we study a Hebbian neural network where neurons are arranged according to a hierarchical architecture, such that their couplings scale with their reciprocal distance. As a full statistical-mechanics solution is not yet available, after a streamlined introduction to the state of the art via that route, the problem is consistently approached through the signal-to-noise technique and extensive numerical simulations. Focusing on the low-storage regime, where the number of stored patterns grows at most logarithmically with the system size, we prove that these non-mean-field Hopfield-like networks display a richer phase diagram than their classical counterparts. In particular, these networks are able to perform serial processing (i.e. retrieve one pattern at a time through a complete rearrangement of the whole ensemble of neurons) as well as parallel processing (i.e. retrieve several patterns simultaneously, delegating the management of different patterns to the diverse communities that make up the network). The tuning between the two regimes is given by the rate of coupling decay and by the level of noise affecting the system. The price to pay for these remarkable capabilities is a network capacity smaller than that of the mean-field counterpart, thus yielding a new budget principle: the wider the multitasking capabilities, the lower the network load, and vice versa. This may have important implications for our understanding of biological complexity.
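
    The parallel-processing behaviour described above can be illustrated with a small numerical experiment (a sketch under assumed conventions, not the paper's signal-to-noise analysis): Hebbian couplings are damped by a factor decaying with the hierarchical distance between neurons, and the network is started with its two main branches aligned with two different patterns.

```python
import numpy as np

def level(i, j):
    """Level of the lowest common ancestor of leaves i and j (0 if i == j)."""
    return 0 if i == j else (i ^ j).bit_length()

rng = np.random.default_rng(2)
k, P, sigma = 7, 2, 0.6                            # 2**k neurons, 2 patterns, decay exponent
N = 2 ** k
xi = rng.choice([-1, 1], size=(P, N))

# Hebbian couplings damped by the hierarchical distance between neurons
decay = np.array([[4.0 ** (-sigma * level(i, j)) for j in range(N)] for i in range(N)])
J = decay * (xi.T @ xi) / N
np.fill_diagonal(J, 0.0)

def relax(s, J, sweeps=30):
    """Zero-temperature asynchronous dynamics."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1 if J[i] @ s >= 0 else -1
    return s

# start with the left branch aligned to pattern 0 and the right branch to pattern 1
start = np.concatenate([xi[0, :N // 2], xi[1, N // 2:]])
final = relax(start, J)
print("left-branch overlap with pattern 0:", final[:N // 2] @ xi[0, :N // 2] / (N // 2))
print("right-branch overlap with pattern 1:", final[N // 2:] @ xi[1, N // 2:] / (N // 2))
```

    If both branch-wise overlaps stay close to 1, each community has retrieved its own pattern, i.e. the network is operating as a parallel processor; weakening the decay (smaller exponent) pushes it back toward serial, single-pattern retrieval.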

    Motifs stability in hierarchical modular networks

    Recent advances in our understanding of information processing in biological systems have highlighted the importance of modularity in the underlying networks (ranging from metabolic to neural networks), as well as the crucial existence of motifs, namely small circuits (not necessarily loopy) whose empirical occurrence in these networks is statistically high. In these notes, mixing statistical-mechanical and graph-theoretical perspectives and restricting to hierarchical modular networks, we analyze the stability of key motifs that naturally emerge, and we prove that loopy structures are systematically more stable than loop-free motifs.
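
    A toy check of why loops help (not the notes' actual stability analysis, which is carried out within the hierarchical construction): comparing, by exact enumeration, the Boltzmann weight of the fully aligned states for a three-node loop versus a three-node chain, at an arbitrary noise level β = 1.

```python
from itertools import product
from math import exp

def aligned_probability(edges, n, beta=1.0):
    """Boltzmann probability of the two fully aligned configurations for a
    small ferromagnetic Ising motif with unit couplings (exact enumeration)."""
    Z = aligned = 0.0
    for s in product([-1, 1], repeat=n):
        energy = -sum(s[i] * s[j] for i, j in edges)
        w = exp(-beta * energy)
        Z += w
        if abs(sum(s)) == n:                       # all spins aligned
            aligned += w
    return aligned / Z

triangle = [(0, 1), (1, 2), (2, 0)]                # loopy motif
chain = [(0, 1), (1, 2)]                           # loop-free motif on the same three nodes
print("triangle:", aligned_probability(triangle, 3))
print("chain   :", aligned_probability(chain, 3))
```

    The triangle assigns a markedly larger weight to the aligned configurations than the open chain does, a minimal analogue of the broader stability of loopy motifs discussed above.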